As artificial intelligence (AI) advances, I am seeing a lot of discussion on LinkedIn and in the online media about the advantages it may bring for either the threat actors (“batten down the hatches, we are all doomed”) or the security defence teams (“it’s OK, relax, AI has you covered”). It strikes me that AI cuts both ways: it could help cyber criminals automate attacks and evade detection more effectively, and it could equally help security professionals detect and respond to threats more quickly and efficiently.
The polarising predictions have also got me wondering … might the impact of AI advances in cybersecurity end up balancing out? Could every advance made by the bad guys be met with equal progress from the good guys? All supported by the same tools? Of course this balancing act only works as long as everyone keeps up with the competition; as cyber criminals become more adept at using AI, security professionals will also have to ensure they are making use of more advanced tools and techniques to defend against these attacks.
So in a boxing match style, let’s take a look at who is in the red corner and who is in the blue. Exactly how might cybercriminals and security professionals make use of AI? And who will win?
In the red corner: The Cyber Criminals
- Using AI to identify targets by scanning the internet for vulnerable systems
- Programming AI bots to mimic human behaviour in order to evade detection by security systems more effectively
- Generating highly targeted phishing emails with AI, perhaps trained on data sets acquired on the Dark Web, so that they include credible details that lure the target and build trust. (With the public becoming more used to interacting with AI bots for customer service, impersonating these chatbots could become a useful social engineering tool for malicious actors.)
- Creating ever more sophisticated malware: using AI to find exploitable patterns in security systems and craft malware specifically designed to evade detection, or building AI-powered evolution into malware so that malicious programmes adapt over time, making them harder to detect and remove.
In the blue corner: The Security Professionals
- Analysing vast amounts of data from multiple sources to identify and track potential threats. Threat Intelligence systems can also learn from past incidents, allowing them to adapt and improve over time. (Generally I would expect much of this AI intelligence gathering to be done by security vendors, and made available to customers and community members.)
- Offering just-in-time training to the workforce before an incident occurs by identifying risky behavioural patterns, and guiding employees to make better decisions for data protection and system security
- Triage: sorting through security incidents and prioritising them based on their level of risk, using AI recommendations to focus efforts on the most critical issues.
- Detecting patterns that indicate a potential security incident, then automatically triggering a response and alerting security teams (an approach NATO has recently proven effective).
- Automating incident investigation; helping identify the root cause of an incident and notifying relevant parties.
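To make the triage idea above concrete, here is a minimal sketch of AI-assisted prioritisation: incidents are scored on risk and the most critical surface first. The field names, weights, and example incidents are all illustrative assumptions, not any vendor's schema; in practice the severity and anomaly scores would come from trained models and the weights would be tuned from incident history rather than hard-coded.

```python
# Minimal sketch of AI-assisted triage: combine model outputs into a
# single risk score and sort incidents most-critical first.
from dataclasses import dataclass


@dataclass
class Incident:
    name: str
    severity: float           # 0.0 (benign) to 1.0 (critical), e.g. from a classifier
    asset_criticality: float  # how important the affected system is
    anomaly_score: float      # how unusual the behaviour looked to a model


def risk_score(incident: Incident) -> float:
    """Weighted combination of model outputs; weights are illustrative."""
    return (0.5 * incident.severity
            + 0.3 * incident.asset_criticality
            + 0.2 * incident.anomaly_score)


def triage(incidents: list[Incident]) -> list[Incident]:
    """Return incidents ordered most-critical first."""
    return sorted(incidents, key=risk_score, reverse=True)


incidents = [
    Incident("failed logins from one IP", 0.4, 0.3, 0.7),
    Incident("malware signature on finance server", 0.9, 0.9, 0.6),
    Incident("unusual outbound traffic", 0.6, 0.5, 0.9),
]

for inc in triage(incidents):
    print(f"{risk_score(inc):.2f}  {inc.name}")
```

Even this toy version shows the point of the list above: the machine does the sorting, but a human still decides what "critical" means and checks that the weighting reflects reality.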
So who will be the winner in this title bout? I am not sure the bookies have a favourite just yet. AI has the potential to revolutionise cybersecurity, but it won’t remove the requirement for a clear architecture and strategy. We don’t get to outsource our jobs to the machines and go and make a cuppa just yet.

Over the coming months and years, it will be important to understand that AI is not a cure-all or a standalone solution, but rather a complementary tool to be used in combination with other security measures. Just like a human security team, AI requires continuous monitoring, evaluation, and tuning to ensure that it is performing as expected and to address any bias or inaccuracies in the data. And of course there are plenty of ethical considerations to be handled too.

If you have enjoyed this post, I recommend hopping into the archive and reading this one too, from our EMEA CISO Neil Thacker, in which he discusses the EU’s upcoming AI Act and gives some useful tips for organisations looking to prepare for the inevitable legislation around AI this year.